
    Exploiting Parameters Learning for Hyper-parameters Optimization in Deep Neural Networks

    In recent years, the Hyper-parameter Optimization (HPO) research field has gained increasing attention. Many works have focused on finding the best combination of a Deep Neural Network's (DNN's) hyper-parameters (HPs) or architecture. The state of the art in HPO is Bayesian Optimization (BO), because it keeps track of past results obtained during the optimization and uses this experience to build a probabilistic model mapping HPs to a probability density of the objective function. BO builds a surrogate probabilistic model of the objective function, finds the HP values that perform best on the surrogate model, and updates it with new results. In this work, a system called Symbolic DNN-Tuner was developed, which logically evaluates the results obtained from the training and validation phases and, by applying symbolic tuning rules, fixes the network architecture and its HPs, thereby improving performance. Symbolic DNN-Tuner improves BO applied to DNNs by adding an analysis of the network's results on the training and validation sets. This analysis is performed by exploiting rule-based programming, in particular Probabilistic Logic Programming (PLP).
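
    As a rough illustration of the symbolic-rule step described above (not the authors' actual rules: the metrics, thresholds, and action names below are all hypothetical), a tuner can map diagnosed training symptoms to HP and architecture fixes:

```python
# Minimal sketch of symbolic tuning rules: diagnose training/validation
# metrics, then map each symptom to a suggested fix. Thresholds and rule
# names are illustrative, not those of Symbolic DNN-Tuner.

def diagnose(train_acc, val_acc, loss_trend):
    """Return symbolic symptoms derived from training results."""
    symptoms = []
    if train_acc - val_acc > 0.10:
        symptoms.append("overfitting")
    if train_acc < 0.70:
        symptoms.append("underfitting")
    if loss_trend == "oscillating":
        symptoms.append("unstable_training")
    return symptoms

# Tuning rules: symptom -> actions on the network's HPs/architecture.
RULES = {
    "overfitting": ["add_dropout", "increase_l2"],
    "underfitting": ["add_layer", "increase_units"],
    "unstable_training": ["decrease_learning_rate"],
}

def tune(train_acc, val_acc, loss_trend):
    actions = []
    for symptom in diagnose(train_acc, val_acc, loss_trend):
        actions.extend(RULES[symptom])
    return actions

print(tune(0.95, 0.78, "decreasing"))  # overfitting -> regularization fixes
```

    In the actual system, the rules are probabilistic logic clauses rather than a Python dictionary, so each suggested fix carries a probability that BO can exploit.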

    Rule-based Programming for Building Expert Systems: a Comparison in the Microbiological Data Validation and Surveillance Domain

    Abstract: In this work, we compare three rule-based programming tools used for building an expert system for microbiological laboratory data validation and bacterial infection monitoring. The first prototype of the system was implemented in KAPPA-PC. We report on the implementation and performance by comparing KAPPA-PC with two other, more recent tools, namely JESS and ILOG JRULES. In order to test each tool, we realized three simple test applications capable of performing some tasks that are peculiar to our expert system.
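
    The core pattern shared by such engines is condition-action rules fired against a working memory of facts. A minimal sketch, with facts, rule names, and thresholds invented for illustration (this is not the system's actual rule base):

```python
# Minimal rule-based validation sketch in the spirit of engines like
# JESS or ILOG JRules: each rule is a name plus a condition over the
# sample; every rule whose condition holds raises an alert.

def validate(sample):
    """Apply validation rules to a lab sample; return triggered alerts."""
    rules = [
        ("missing_organism", lambda s: not s.get("organism")),
        ("implausible_count", lambda s: s.get("colony_count", 0) > 1e9),
        ("resistance_alert",
         lambda s: "carbapenem" in s.get("resistances", [])),
    ]
    return [name for name, condition in rules if condition(sample)]

sample = {"organism": "A. baumannii",
          "colony_count": 5e4,
          "resistances": ["carbapenem"]}
print(validate(sample))  # -> ['resistance_alert']
```

    Real engines add what this sketch omits: pattern matching over many facts at once (Rete), rule priorities, and chaining of conclusions back into working memory.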

    Unfolding-Based Process Discovery

    This paper presents a novel technique for process discovery. In contrast to the current trend, which considers only an event log for discovering a process model, we assume two additional inputs: an independence relation on the set of logged activities, and a collection of negative traces. After deriving an intermediate net unfolding from them, we perform a controlled folding giving rise to a Petri net which contains both the input log and all independence-equivalent traces arising from it. Remarkably, the derived Petri net cannot execute any trace from the negative collection. The entire chain of transformations is fully automated. A tool has been developed, and experimental results are provided that witness the significance of the contribution of this paper. Comment: this is the unabridged version of a paper with the same title that appeared in the proceedings of ATVA 201
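
    The guarantee can be checked by token replay: a trace is accepted by the net exactly when its transitions can fire in order from the initial marking. A toy sketch (the net below is invented, not produced by the paper's unfolding/folding procedure):

```python
# Replay traces on a Petri net: each transition consumes one token from
# each of its input places and produces one in each output place.

def can_replay(trace, pre, post, marking):
    """Fire the transitions of `trace` in order; True if all can fire."""
    marking = dict(marking)
    for t in trace:
        needed = pre.get(t, [])
        if any(marking.get(p, 0) < 1 for p in needed):
            return False
        for p in needed:
            marking[p] -= 1
        for p in post.get(t, []):
            marking[p] = marking.get(p, 0) + 1
    return True

# Toy net: a, then b and c concurrently (independent), then d.
pre  = {"a": ["p0"], "b": ["p1"], "c": ["p2"], "d": ["p3", "p4"]}
post = {"a": ["p1", "p2"], "b": ["p3"], "c": ["p4"], "d": []}
m0 = {"p0": 1}

print(can_replay(["a", "b", "c", "d"], pre, post, m0))  # True: logged trace
print(can_replay(["a", "c", "b", "d"], pre, post, m0))  # True: independence-equivalent
print(can_replay(["b", "a"], pre, post, m0))            # False: a negative trace
```

    The paper's contribution is constructing a net where the first two cases hold for the whole log and the third holds for every trace in the negative collection.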

    Acinetobacter baumannii in intensive care unit: A novel system to study clonal relationship among the isolates

    Background: The nosocomial infection surveillance system must be highly effective, especially in critical areas such as Intensive Care Units (ICUs). These areas are frequently an epidemiological epicentre for the transmission of multi-resistant pathogens like Acinetobacter baumannii. When an epidemic outbreak occurs, it is very important to confirm or exclude the genetic relationship among the isolates in a short time. Several molecular typing systems are used with this aim. Repetitive sequence-based PCR (REP-PCR) has been recognized as an effective method and was recently adapted to an automated format known as the DiversiLab system. Methods: In the present study we evaluated the combination of a newly introduced software package for the control of hospital infection (VIGI@ct) with the DiversiLab system. In order to evaluate the reliability of DiversiLab, its results were also compared with those obtained using f-AFLP. Results: The combination of VIGI@ct and DiversiLab enabled an earlier identification of an A. baumannii epidemic cluster, through the confirmation of the genetic relationship among the isolates. This cluster comprises 56 multi-drug-resistant A. baumannii isolates from several specimens collected from 13 different patients admitted to the ICU over a ten-month period. The A. baumannii isolates were clonally related, with similarity between 97% and 100%. The DiversiLab results were confirmed by f-AFLP analysis. Conclusion: The early identification of the outbreak led to the prompt application of operative procedures and precautions to avoid the spread of the pathogen. To date, 6 months after the last A. baumannii isolate, no other related case has been identified.
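
    Calling isolates clonally related when pairwise similarity reaches a cut-off (97% in the study above) amounts to single-linkage clustering. A sketch with made-up isolate IDs and similarity values:

```python
# Group isolates into clonal clusters by single-linkage on pairwise
# similarity >= threshold, using a small union-find structure.
# Isolate IDs and similarities are invented for illustration.

def clusters(similarity, ids, threshold=97.0):
    """similarity: {(id_a, id_b): percent}; returns sorted clusters."""
    parent = {i: i for i in ids}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for (a, b), s in similarity.items():
        if s >= threshold:
            parent[find(a)] = find(b)

    groups = {}
    for i in ids:
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

ids = ["iso1", "iso2", "iso3"]
sim = {("iso1", "iso2"): 98.5,
       ("iso2", "iso3"): 91.0,
       ("iso1", "iso3"): 90.2}
print(clusters(sim, ids))  # iso1 and iso2 clonally related; iso3 separate
```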

    Efficient Process Model Discovery Using Maximal Pattern Mining

    In recent years, process mining has become one of the most important and promising areas of research in the field of business process management, as it helps businesses understand, analyze, and improve their business processes. In particular, several techniques and algorithms have been proposed to discover and construct process models from workflow execution logs (i.e., event logs). With existing techniques, mined models are built by analyzing the relationship between any two events seen in the event log. Being restricted in this way, they can handle only special cases of routing constructs and often produce unsound models that do not cover all of the traces seen in the log. In this paper, we propose a novel technique for process discovery using Maximal Pattern Mining (MPM), where we construct patterns based on the whole sequence of events seen in the traces, ensuring the soundness of the mined models. Our MPM technique can handle loops (of any length), duplicate tasks, non-free-choice constructs, and long-distance dependencies. Our evaluation shows that it consistently achieves better precision, replay fitness, and efficiency than existing techniques.
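
    One idea behind whole-sequence patterns can be sketched as follows: keep only maximal patterns (traces not contained in another trace as a subsequence) and check that they cover the full log. This is a deliberate simplification of the paper's MPM technique, for illustration only:

```python
# Keep maximal patterns from an event log and verify they cover every
# logged trace; a toy reduction of pattern-based discovery.

def is_subsequence(small, big):
    """True if `small` occurs in `big` in order (not necessarily adjacent)."""
    it = iter(big)
    return all(e in it for e in small)

def maximal_patterns(log):
    """Traces not strictly contained, as subsequences, in another trace."""
    return [t for t in log
            if not any(t != u and is_subsequence(t, u) for u in log)]

log = [("a", "b", "c"), ("a", "c"), ("a", "b", "b", "c")]
patterns = maximal_patterns(log)
print(patterns)  # only the longest trace survives

# Soundness check: every logged trace is covered by some maximal pattern.
print(all(any(is_subsequence(t, p) for p in patterns) for t in log))
```

    The real technique mines patterns that are frequent across traces rather than whole traces, which is what lets it recover loops, duplicate tasks, and long-distance dependencies.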

    Economic and organizational impact of a clinical decision support system on laboratory test ordering

    Background: We studied the impact of a clinical decision support system (CDSS), implemented in a few wards of two Italian health care organizations, on the ordering of redundant laboratory tests under different perspectives: (1) analysis of the number of tests, (2) cost analysis, and (3) end-user satisfaction before and after the installation of the CDSS. Methods: (1) and (2) were performed by comparing the ordering of laboratory tests between an intervention group of wards where a CDSS was in use and a control group where it was not; data were compared during a 3-month period before (2014) and a 3-month period after (2015) CDSS installation. To measure end-user satisfaction, a questionnaire based on POESUS was administered to the medical staff. Results: After the introduction of the CDSS, the number of laboratory tests requested decreased by 16.44% and costs decreased by 16.53% in the intervention group, versus an increase in the number of tests (+3.75%) and in costs (+1.78%) in the control group. Feedback from practice showed that the medical staff was generally satisfied with the CDSS and perceived its benefits, but was less satisfied with its technical performance in terms of slow response time. Conclusions: The implementation of CDSSs can have a positive impact on both the efficiency of care provision and health care costs. The experience of using a CDSS can also result in good practice to be implemented by other health care organizations, considering the positive result from the first attempt to gather the point of view of end-users in Italy.
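
    The reported percentages are plain before/after relative changes computed per group. A sketch of the formula, with invented counts (only the formula, not the data, reflects the study):

```python
# Relative change between two observation periods, as used for the
# per-group test-count and cost comparisons. Counts are hypothetical.

def pct_change(before, after):
    """Percentage change from `before` to `after` (negative = decrease)."""
    return (after - before) / before * 100

tests_before, tests_after = 10_000, 8_356   # invented intervention-group counts
print(round(pct_change(tests_before, tests_after), 2))  # -16.44
```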

    Studying transaction fees in the Bitcoin Blockchain with probabilistic logic programming

    In Bitcoin, if a miner is able to solve a computationally hard problem called proof of work, it receives an amount of bitcoin as a reward, which is the sum of the fees for the transactions included in the block plus an amount inversely proportional to the number of blocks discovered so far. At the time of writing, the block reward is several orders of magnitude greater than the sum of the transaction fees. Usually, miners try to collect the largest reward by including transactions associated with high fees. The main purpose of transaction fees is to prevent network spamming, but they are also used to prioritize transactions. In order to pay the minimum amount of fees, users usually have to find a compromise between fees and the urgency of a transaction. In this paper, we develop a probabilistic logic model to experimentally analyze how fees affect confirmation time and miner revenue, and to predict whether an increase in average fees will generate a situation in which a miner gets more reward by not following the protocol.
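
    The miner behaviour the model assumes can be sketched as a greedy fill of the block by fee rate. Transaction sizes, fees, and the block limit below are invented, and real mempool policies are more involved:

```python
# Greedy block template construction: take transactions in decreasing
# fee-per-byte order while they fit in the block size limit.

def select_transactions(mempool, block_limit):
    """mempool: list of (txid, size_bytes, fee_satoshi); returns txids."""
    chosen, used = [], 0
    for txid, size, fee in sorted(mempool,
                                  key=lambda t: t[2] / t[1],
                                  reverse=True):
        if used + size <= block_limit:
            chosen.append(txid)
            used += size
    return chosen

mempool = [("tx1", 250, 5_000),    # 20 sat/byte
           ("tx2", 500, 6_000),    # 12 sat/byte
           ("tx3", 400, 12_000)]   # 30 sat/byte
print(select_transactions(mempool, 700))  # highest fee-rate txs that fit
```

    This greedy selection is why low-fee transactions wait longer for confirmation, which is the trade-off between fees and urgency that the probabilistic logic model quantifies.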

    Learning hierarchical probabilistic logic programs

    Probabilistic logic programming (PLP) combines logic programs and probabilities. Due to its expressiveness and simplicity, it has been considered a powerful tool for learning and reasoning in relational domains characterized by uncertainty. Still, learning the parameters and the structure of general PLPs is computationally expensive due to the inference cost. We have recently proposed a restriction of the general PLP language called hierarchical PLP (HPLP), in which clauses and predicates are hierarchically organized. HPLPs can be converted into arithmetic circuits or deep neural networks, and inference is much cheaper than for general PLP. In this paper we present algorithms for learning both the parameters and the structure of HPLPs from data. We first present an algorithm, called parameter learning for hierarchical probabilistic logic programs (PHIL), which performs parameter estimation of HPLPs using gradient descent and expectation maximization. We also propose structure learning of hierarchical probabilistic logic programming (SLEAHP), which learns both the structure and the parameters of HPLPs from data. Experiments were performed comparing PHIL and SLEAHP with state-of-the-art PLP and Markov Logic Network systems for parameter and structure learning, respectively. PHIL was compared with EMBLEM, ProbLog2, and Tuffy, and SLEAHP with SLIPCOVER, PROBFOIL+, MLB-BC, MLN-BT, and RDN-B. The experiments on five well-known datasets show that our algorithms achieve similar and often better accuracies in a shorter time.
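
    The gradient-descent side of parameter learning can be illustrated on the smallest possible case: fitting the probability of a single probabilistic fact to observed truth values by maximizing the Bernoulli log-likelihood. PHIL operates on full hierarchical programs via arithmetic circuits; this toy only shows the descent idea:

```python
# Fit the probability p of one probabilistic fact to 0/1 observations by
# gradient ascent on the log-likelihood, with p = sigmoid(w) to keep the
# parameter unconstrained. A toy stand-in for learning in HPLPs.

import math

def learn_p(observations, lr=0.1, steps=2000):
    """observations: list of 0/1 outcomes; returns the learned probability."""
    w = 0.0
    for _ in range(steps):
        p = 1 / (1 + math.exp(-w))
        # d/dw of the Bernoulli log-likelihood is sum(y - p).
        grad = sum(y - p for y in observations)
        w += lr * grad / len(observations)
    return 1 / (1 + math.exp(-w))

obs = [1, 1, 1, 0]          # fact observed true 3 times out of 4
print(round(learn_p(obs), 2))  # converges to the empirical frequency 0.75
```

    In an HPLP, each hidden clause contributes through noisy-or-style combining nodes of the circuit, so the gradient of each parameter is obtained by backpropagation through that circuit rather than this closed-form sum.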